
Scientists Shocked To Find Lab Gloves May Be Skewing Microplastics Data

Researchers found that common nitrile and latex lab gloves can shed stearate particles that closely resemble microplastics, potentially "increasing the risk of false positives when studying microplastic pollution," reports ScienceDaily. "We may be overestimating microplastics, but there should be none," said Anne McNeil, senior author of the study and U-M professor of chemistry, macromolecular science and engineering. "There's still a lot out there, and that's the problem." From the report: Researchers found that these gloves can unintentionally transfer particles onto lab tools used to analyze air, water, and other environmental samples. The contamination comes from stearates, which are not plastics but can closely resemble them during testing. Because of this, scientists may be detecting particles that are not true microplastics. To reduce this issue, U-M researchers Madeline Clough and Anne McNeil recommend using cleanroom gloves, which release far fewer particles. Stearates are salt-based, soap-like substances added to disposable gloves to help them separate easily from molds during manufacturing. However, their chemical similarity to certain plastics makes them difficult to distinguish in lab analyses, increasing the risk of false positives when studying microplastic pollution. "For microplastics researchers who have these impacted datasets, there's still hope to recover them and find a true quantity of microplastics," said researcher and recent doctoral graduate Madeline Clough. "This field is very challenging to work in because there's plastic everywhere," McNeil said. "But that's why we need chemists and people who understand chemical structure to be working in this field." The findings have been published in the journal Analytical Methods.

Read more of this story at Slashdot.


AI Data Centers Can Warm Surrounding Areas By Up To 9.1C

An anonymous reader quotes a report from New Scientist: Andrea Marinoni at the University of Cambridge, UK, and his colleagues saw that the amount of energy needed to run a data center had been steadily increasing of late and was likely to "explode" in the coming years, so they wanted to quantify the impact. The researchers took satellite measurements of land surface temperatures over the past 20 years and cross-referenced them against the geographical coordinates of more than 8,400 AI data centers. Recognizing that surface temperature could be affected by other factors, the researchers chose to focus their investigation on data centers located away from densely populated areas. They discovered that land surface temperatures increased by an average of 2C (3.6F) in the months after an AI data center started operations. In the most extreme cases, the increase was 9.1C (16.4F). The effect wasn't limited to the immediate surroundings of the data centers: the team found increased temperatures up to 10 kilometers away, and even seven kilometers out, the intensity of the effect fell by only 30 percent. "The results we had were quite surprising," says Marinoni. "This could become a huge problem." Using population data, the researchers estimate that more than 340 million people live within 10 kilometers of a data center, and so live in a place that is warmer than it would be if the data center hadn't been built there. Marinoni says that areas including the Bajio region in Mexico and the Aragon province in Spain saw a 2C (3.6F) temperature increase in the 20 years between 2004 and 2024 that couldn't otherwise be explained. University of Bristol researcher Chris Preist said the findings may be more complicated than they look. "It would be worth doing follow-up research to understand to what extent it's the heat generated from computation versus the heat generated from the building itself," he says. For example, the building being heated by sunlight may be part of the effect.
The findings of the study, which has not yet been peer-reviewed, can be found on arXiv.


After 16 Years and $8 Billion, the Military's New GPS Software Still Doesn't Work

An anonymous reader quotes a report from Ars Technica: Last year, just before the Fourth of July holiday, the US Space Force officially took ownership of a new operating system for the GPS navigation network, raising hopes that one of the military's most troubled space programs might finally bear fruit. The GPS Next-Generation Operational Control System, or OCX, is designed for command and control of the military's constellation of more than 30 GPS satellites. It consists of software to handle new signals and jam-resistant capabilities of the latest generation of GPS satellites, GPS III, which started launching in 2018. The ground segment also includes two master control stations and upgrades to ground monitoring stations around the world, among other hardware elements. RTX Corporation, formerly known as Raytheon, won a Pentagon contract in 2010 to develop and deliver the control system. The program was supposed to be complete in 2016 at a cost of $3.7 billion. Today, the official cost for the ground system for the GPS III satellites stands at $7.6 billion. RTX is developing an OCX augmentation projected to cost more than $400 million to support a new series of GPS IIIF satellites set to begin launching next year, bringing the total effort to $8 billion. Although RTX delivered OCX to the Space Force last July, the ground segment remains nonoperational. Nine months later, the Pentagon may soon call it quits on the program. Thomas Ainsworth, assistant secretary of the Air Force for space acquisition and integration, told Congress last week that OCX is still struggling. The GAO found the OCX program was undermined by "poor acquisition decisions and a slow recognition of development problems." By 2016, it had blown past cost and schedule targets badly enough to trigger a Pentagon review for possible cancellation. 
Officials also pointed to cybersecurity software issues, a "persistently high software development defect rate," the government's lack of software expertise, and Raytheon's "poor systems engineering" practices. Even after the military restructured the program, it kept running into delays and overruns, with Ainsworth telling lawmakers, "It's a very stressing program" and adding, "We are still considering how to ensure we move forward."


Microsoft Copilot Is Now Injecting Ads Into Pull Requests On GitHub

Microsoft Copilot is reportedly injecting promotional "tips" into GitHub pull requests, with Neowin claiming more than 1.5 million PRs have been affected by messages advertising integrations like Raycast, Slack, Teams, and various IDEs. From the report: According to Melbourne-based software developer Zach Manson, a team member used the AI to fix a simple typo in a pull request. Copilot did the job, but it also took the liberty of editing the PR's description to include this message: "Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast." A quick search of that phrase on GitHub shows that the same promotional text appears in over 11,000 pull requests across thousands of repositories. Even merge requests on GitLab aren't safe from the injection. So what's happening? Well, Raycast has a Copilot extension that can do things like create pull requests from a natural language command. The ad directly names Raycast, so you might think that Raycast is injecting the promo into the PRs to market its own app. But it is more likely that Microsoft is the one doing the injecting. If you look at the raw markdown of the affected pull requests, there is a hidden HTML comment, "START COPILOT CODING AGENT TIPS", placed right before the ad tip. This suggests Microsoft is using the comment to insert a "tip" that points back to its own developer ecosystem or partner integrations. UPDATE: Following backlash from developers, Microsoft has removed Copilot's ability to insert "tips" into pull requests. Tim Rogers, principal product manager for Copilot at GitHub, said the feature was intended "to help developers learn new ways to use the agent in their workflow." "On reflection," Rogers said he has since realized that letting Copilot make changes to PRs written by a human without their knowledge "was the wrong judgement call."
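Because the injected tip sits behind a known marker comment in the raw markdown, a repository owner could scan or scrub PR bodies for it. Here is a minimal sketch: the marker string is the one quoted in the report (wrapped in standard HTML comment syntax, which is an assumption), and the function names plus the "strip from the marker to the end of the body" heuristic are illustrative, not how GitHub actually stores the tip.

```python
# Detect and strip a Copilot-injected "tip" from a pull-request body.
# MARKER is the hidden comment quoted in the article; the HTML comment
# wrapper and the strip-to-end-of-body heuristic are assumptions.
MARKER = "<!-- START COPILOT CODING AGENT TIPS -->"

def has_copilot_tip(body: str) -> bool:
    """True if the PR body contains the hidden marker comment."""
    return MARKER in body

def strip_copilot_tip(body: str) -> str:
    """Remove everything from the marker comment onward (sketch)."""
    idx = body.find(MARKER)
    if idx == -1:
        return body          # no injected tip found
    return body[:idx].rstrip()
```

A maintainer could run something like this over recent PR descriptions fetched from the API to find affected pull requests in their own repositories.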


A question about the maximum number of values in a registry key raises questions about the question


A customer wanted to know the maximum number of values that can be stored in a single registry key. They found that they ran into problems when they reached a certain number of values, which was well over a quarter million.

Okay, wait a second. Why are you adding over a quarter million values to a registry key!?

The customer explained that they mark every file in their installer as msidbComponentAttributesSharedDllRefCount, to avoid the problem described in the documentation. And when I said every file, I really meant every file. Not just DLLs, but also text files, GIFs, XML files, everything. Just the names of the values add up to over 30 megabytes.
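A quick back-of-envelope check on those figures: if "well over a quarter million" value names total more than 30 megabytes, the average name is long, which is consistent with the values being full file paths. The 250,000 figure below is a stand-in for "a quarter million," and the 2-bytes-per-character factor assumes the registry's UTF-16 storage of value names.

```python
# Back-of-envelope check: 250,000+ registry values whose names alone
# total over 30 MB implies roughly 125 bytes (~62 UTF-16 characters)
# per name -- consistent with the values being full file paths.
values = 250_000                     # "well over a quarter million"
total_bytes = 30 * 1024 * 1024       # "over 30 megabytes"

bytes_per_name = total_bytes / values    # average storage per name
chars_per_name = bytes_per_name / 2      # UTF-16: 2 bytes per character
```

So each value name averages around sixty characters, about the length of a typical installed file path.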

Since their product supports multiple versions installed side-by-side, installing multiple versions of their product accumulates values in the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SharedDLLs registry key.

The customer saw the story about problems if you forget to mark a shared file as msidbComponentAttributesSharedDllRefCount, and decided that they are going to fix it by saying that every single file should go into SharedDLLs. But that’s the wrong lesson.

The lesson is “If a file is shared, then mark it as shared.” And “shared” means “multiple products use the same DLL installed into the same directory” (such as the system32 directory or the C:\Program Files\Common Files\Contoso\ directory). Since the customer says that their programs install side-by-side, there are unlikely to be any shared files at all! They probably can just remove the msidbComponentAttributesSharedDllRefCount attribute from all of their files.

The SharedDLLs registry key was created in Windows 95 as one of many attempts to address the problem of DLL management when multiple products all want to install the same DLL (for example, the C runtime library). Any DLL that was shared would be registered in the SharedDLLs registry key with a “usage count”. An installer would increment the count, and an uninstaller would decrement it.
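The usage-count protocol described above can be modeled in a few lines. This is a sketch only: a plain dict stands in for the HKLM\...\SharedDLLs registry key, and the function names are illustrative, not any real installer API.

```python
# Minimal model of the SharedDLLs usage-count protocol: installers
# increment a per-DLL count, and uninstallers decrement it, deleting
# the file only when the count reaches zero. A dict stands in for
# the SharedDLLs registry key; names are illustrative.
shared_dlls: dict[str, int] = {}

def on_install(path: str) -> None:
    """An installer registers (or re-registers) a shared DLL."""
    shared_dlls[path] = shared_dlls.get(path, 0) + 1

def on_uninstall(path: str) -> bool:
    """Decrement the count; return True if the DLL may be deleted."""
    count = shared_dlls.get(path, 0) - 1
    if count <= 0:
        shared_dlls.pop(path, None)
        return True          # last user is gone: safe to delete
    shared_dlls[path] = count
    return False             # another product still needs the DLL
```

Two products installing the same DLL leave a count of 2; only when the second one uninstalls does the bookkeeping say the file can go.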

Now, this addressed only the “keeping track of when it is safe to delete a DLL when uninstalling” problem. It doesn’t do anything to solve the “multiple versions of the same DLL” problem. For that, the assumption was that (1) installers would compare the version number of the DLL already on the system with the version they want to install, and replace the existing file only if the new file has a higher version number; and with that policy, you also need (2) all future versions of a DLL to be backward compatible with any earlier versions.

Now, that first rule is typically enforced by installers, though not always. But that second rule is harder to enforce because it relies on the developers who created the shared DLLs to understand the backward compatibility constraints that they operate under. If a newer version of the DLL is not compatible with the old one, then any programs that used the old version will break once a program is installed that replaces the shared DLL with a newer version.

And from experience, we know that even the most harmless-looking change carries a risk that somebody was relying on the old behavior, perhaps entirely inadvertently, such as assuming that a function consumes only a specific amount of stack space and in particular leaves certain stack memory unmodified. This means that the simple act of adding a new local variable to your function is potentially a breaking change.

Nowadays, programs avoid this problem by trying to be more self-contained with few shared DLLs, and by using packaging systems like MSIX to allow unrelated programs to share a common installation of popular DLLs, while still avoiding the “unwanted version upgrade” problem.

The post A question about the maximum number of values in a registry key raises questions about the question appeared first on The Old New Thing.


Google details new 24-hour process to sideload unverified Android apps


Google is planning big changes for Android in 2026 aimed at combating malware across the entire device ecosystem. Starting in September, Google will begin restricting application sideloading with its developer verification program, but not everyone is on board. Android Ecosystem President Sameer Samat tells Ars that the company has been listening to feedback, and the result is the newly unveiled advanced flow, which will allow power users to skip app verification.

With its new limits on sideloading, Android phones will only install apps that come from verified developers. To verify, devs releasing apps outside of Google Play will have to provide identification, upload a copy of their signing keys, and pay a $25 fee. It all seems rather onerous for people who just want to make apps without Google's intervention.

Apps that come from unverified developers won't be installable on Android phones—unless you use the new advanced flow, which will be buried in the developer settings.

When sideloading apps today, Android phones alert the user to the "unknown sources" toggle in the settings, and there's a flow to help you turn it on. The verification bypass is different and will not be revealed to users. You have to know where this is and proactively turn it on yourself, and it's not a quick process. Here are the steps:

  • Enable developer options by tapping the software build number in About Phone seven times.
  • In Settings > System, open Developer Options and scroll down to "Allow Unverified Packages."
  • Flip the toggle and tap to confirm you are not being coerced.
  • Enter your device unlock PIN or password.
  • Restart your device.
  • Wait 24 hours.
  • Return to the unverified packages menu at the end of the security delay.
  • Scroll past additional warnings and select either "Allow temporarily" (seven days) or "Allow indefinitely."
  • Check the box confirming you understand the risks.
  • You can now install unverified packages on the device by tapping the "Install anyway" option in the package manager.
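The gating in the steps above boils down to a timestamped request plus a mandatory delay. Here is a minimal model: the 24-hour figure comes from the article, while the class and method names are illustrative, not Android's actual implementation.

```python
# Model of the "advanced flow" security delay: flipping the toggle
# records a timestamp, and unverified installs stay blocked until
# 24 hours have elapsed. The 24-hour figure is from the article;
# class and method names are illustrative.
from datetime import datetime, timedelta

DELAY = timedelta(hours=24)

class AdvancedFlow:
    def __init__(self) -> None:
        self.requested_at: datetime | None = None

    def request(self, now: datetime) -> None:
        """User flips the 'Allow Unverified Packages' toggle."""
        self.requested_at = now

    def can_install_unverified(self, now: datetime) -> bool:
        """Unverified installs allowed only after the full delay."""
        if self.requested_at is None:
            return False          # toggle was never requested
        return now - self.requested_at >= DELAY
```

The point of the design is visible in the model: there is no state in which a user can go from "toggle off" to "install allowed" within a single scam phone call.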

The actual legwork to activate this feature only takes a few seconds, but the 24-hour countdown makes it something you cannot do on the spur of the moment. But why 24 hours? According to Samat, this is designed to combat the rising use of high-pressure social engineering attacks, in which the scammer convinces the victim they have to install an app immediately to avoid severe consequences.

You'll have to wait 24 hours to bypass verification. Credit: Google

"In that 24-hour period, we think it becomes much harder for attackers to persist their attack," said Samat. "In that time, you can probably find out that your loved one isn't really being held in jail or that your bank account isn't really under attack."

But for people who are sure they don't want Google's verification system to get in the way of sideloading any old APK they come across, they don't have to wait until they encounter an unverified app to get started. You only have to select the "indefinitely" option once on a phone, and you can turn dev options off again afterward.

Choice vs. security

According to Samat, Google feels a responsibility to Android users worldwide, and things are different than they used to be with more than 3 billion active devices out there.

"For a lot of people in the world, their phone is their only computer, and it stores some of their most private information," Samat said. "Over the years, we've evolved the platform to keep it open while also keeping it safe. And I want to emphasize, if the platform isn't safe, people aren't going to use it, and that's a lose-lose situation for everyone, including developers."

But what does that safety look like? Google swears it's not interested in the content of apps, and it won't be checking proactively when developers register. This is only about identity verification—you should know when you're installing an app that it's not an imposter and does not come from known purveyors of malware. If a verified developer distributes malware, they're unlikely to remain verified. And what is malware? For Samat, malware in the context of developer verification is an application package that "causes harm to the user's device or personal data that the user did not intend."

So a rootkit can be malware, but a rootkit you downloaded intentionally because you want root access on your phone is not malware, from Samat's perspective. Likewise, an alternative YouTube client that bypasses Google's ads and feature limits isn't causing the kind of harm that would lead to issues with verification. But these are just broad strokes; Google has not commented on any specific apps.

Google says sideloading isn't going away, but it is changing. Credit: Google

Google is proceeding cautiously with the verification rollout, and some details are still spotty. Privacy advocates have expressed concern that verification will create a database that puts independent developers at risk of legal action. Samat says that Google does push back on judicial orders for user data when they are improper. The company further suggests it's not intending to create a permanent list of developer identities that would be vulnerable to legal demands. We've asked for more detail on what data Google retains from the verification process and for what length of time.

There is also concern that developers living in sanctioned nations might be unable to verify due to the required fee. Google notes that the verification process may vary across countries and was not created specifically to bar developers in places like Cuba or Iran. We've asked for details on how Google will handle these edge cases and will update if we learn more.

Rolling out in 2026 and beyond

Android users in most of the world don't have to worry about developer verification yet, but that day is coming. In September, verification enforcement will begin in Brazil, Singapore, Indonesia, and Thailand. Impersonation and guided scams are more common in these regions, so Google is starting there before expanding verification globally next year. Google has stressed that the advanced flow will be available before the initial rollout in September.

Google stands by its assertion that users are 50 times more likely to get malware outside Google Play than in it. A big part of the gap, Samat says, is Google's decision in 2023 to begin verifying developer identities in the Play Store. This provided a framework for universal developer verification. While there are certainly reasons Google might like the control verification gives it, the Android team has felt real pressure from regulators in areas with malware issues to address platform security.

"In a lot of countries, there is chatter about if this isn't safer, then there may need to be regulatory action to lock down more of this stuff," Samat told Ars Technica. "I don't think that it's well understood that this is a real security concern in a number of countries."

Google has already started delivering the verifier to devices around the world—it's integrated with Android 16.1, which launched late in 2025. Eventually, the verifier and advanced flow will be on all currently supported Android devices. However, the UI will be consistent, with Google providing all the components and scare screens. So what you see here should be similar to what appears on your phone in a few months, regardless of who made it.



